Using a Laplace approximation to estimate the random coefficients logit model by non-linear least squares
Current methods of estimating the random coefficients logit model employ simulations of the distribution of the taste parameters through pseudo-random sequences. These methods suffer from difficulties in estimating correlations between parameters and computational limitations such as the curse of dimensionality. This paper provides a solution to these problems by approximating the integral expression of the expected choice probability using a multivariate extension of the Laplace approximation. Simulation results reveal that our method performs very well, both in terms of accuracy and computational time. This paper is a revised version of CWP01/06.
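The core computation can be sketched in a simple one-dimensional case (the paper's method is multivariate): the expected choice probability is an integral of a logit probability against the taste-parameter density, and the Laplace approximation replaces that integral with an evaluation at the mode of the log-integrand. All parameter values below are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from scipy.optimize import minimize_scalar

# Approximate E[P(beta)] where P is a binary-logit choice probability and
# beta ~ N(mu, sigma^2), via the Laplace approximation to
#   I = \int P(beta) phi(beta) dbeta = \int exp(h(beta)) dbeta,
#   h(beta) = log P(beta) + log phi(beta).

mu, sigma = 0.5, 1.0     # assumed taste-parameter distribution
x = 2.0                  # assumed product attribute

def log_p(b):            # log logit probability for a binary choice
    return b * x - np.log1p(np.exp(b * x))

def h(b):
    return log_p(b) - 0.5 * ((b - mu) / sigma) ** 2 - 0.5 * np.log(2 * np.pi * sigma ** 2)

# 1) find the mode of h (h is concave, so the mode is unique)
res = minimize_scalar(lambda b: -h(b), bounds=(-10, 10), method="bounded")
b_star = res.x

# 2) numerical second derivative of h at the mode
eps = 1e-5
h2 = (h(b_star + eps) - 2 * h(b_star) + h(b_star - eps)) / eps ** 2

# 3) Laplace approximation: I ~ exp(h(b*)) * sqrt(2*pi / -h''(b*))
I_laplace = np.exp(h(b_star)) * np.sqrt(2 * np.pi / -h2)

# Monte Carlo benchmark for comparison
rng = np.random.default_rng(0)
I_mc = np.exp(log_p(rng.normal(mu, sigma, 200_000))).mean()
print(I_laplace, I_mc)   # the two estimates should be close
```

In the paper this approximated choice probability is then matched to observed market shares by non-linear least squares; the sketch only shows the integral-approximation step.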
Stochastic IMT (insulator-metal-transition) neurons: An interplay of thermal and threshold noise at bifurcation
Artificial neural networks can harness stochasticity in multiple ways to
enable a vast class of computationally powerful models. Electronic
implementation of such stochastic networks is currently limited to the addition
of algorithmic noise to digital machines, which is inherently inefficient,
although recent efforts to harness physical noise in devices for stochasticity
have shown promise. To succeed in fabricating electronic neuromorphic networks, we
need experimental evidence of devices with measurable and controllable
stochasticity which is complemented with the development of reliable
statistical models of such observed stochasticity. Current research literature
has sparse evidence of the former and a complete lack of the latter. This
motivates the current article where we demonstrate a stochastic neuron using an
insulator-metal-transition (IMT) device, based on electrically induced
phase-transition, in series with a tunable resistance. We show that an IMT
neuron has dynamics similar to a piecewise linear FitzHugh-Nagumo (FHN) neuron
and incorporates all characteristics of a spiking neuron in the device
phenomena. We experimentally demonstrate spontaneous stochastic spiking along
with electrically controllable firing probabilities using vanadium dioxide
(VO2)-based IMT neurons, which show a sigmoid-like transfer function. The
stochastic spiking is explained by two noise sources - thermal noise and
threshold fluctuations, which act as precursors of bifurcation. As such, the
IMT neuron is modeled as an Ornstein-Uhlenbeck (OU) process with a fluctuating
boundary, resulting in transfer curves that closely match experiments. As one of
the first comprehensive studies of stochastic neuron hardware and its
statistical properties, this article should enable efficient implementation of a
large class of neuro-mimetic networks and algorithms.
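The Ornstein-Uhlenbeck-with-fluctuating-boundary picture can be illustrated with a minimal simulation (our sketch, not the authors' fitted model; all parameter values are assumptions chosen for demonstration): an OU variable is integrated by Euler-Maruyama steps and a spike is registered when it crosses a Gaussian-jittered threshold, which yields a sigmoid-like transfer curve as the drive increases.

```python
import numpy as np

rng = np.random.default_rng(1)

def firing_prob(drive, theta=1.0, sigma=0.3, vth=1.0, vth_jitter=0.2,
                dt=1e-3, steps=2000, trials=500):
    """Fraction of trials in which the OU process
        dV = theta*(drive - V) dt + sigma dW
    crosses a per-trial fluctuating threshold within the simulated window."""
    v = np.zeros(trials)
    th = vth + vth_jitter * rng.standard_normal(trials)  # threshold fluctuation
    fired = np.zeros(trials, dtype=bool)
    for _ in range(steps):
        # Euler-Maruyama step, vectorized over trials
        v += theta * (drive - v) * dt + sigma * np.sqrt(dt) * rng.standard_normal(trials)
        fired |= v >= th
    return fired.mean()

drives = [0.2, 0.6, 1.0, 1.4]
probs = [firing_prob(d) for d in drives]
print(probs)  # firing probability should rise with drive, sigmoid-like
```

Both noise sources from the abstract appear here: thermal noise as the diffusion term sigma dW, and threshold fluctuations as the per-trial jitter on the boundary.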
Inherent Weight Normalization in Stochastic Neural Networks
Multiplicative stochasticity such as Dropout improves the robustness and
generalizability of deep neural networks. Here, we further demonstrate that
always-on multiplicative stochasticity combined with simple threshold neurons
are sufficient operations for deep neural networks. We call such models Neural
Sampling Machines (NSM). We find that the probability of activation of the NSM
exhibits a self-normalizing property that mirrors Weight Normalization, a
previously studied mechanism that fulfills many of the features of Batch
Normalization in an online fashion. The normalization of activities during
training speeds up convergence by preventing internal covariate shift caused by
changes in the input distribution. The always-on stochasticity of the NSM
confers the following advantages: the network is identical in the inference and
learning phases, making the NSM suitable for online learning; it can exploit
stochasticity inherent to a physical substrate, such as analog non-volatile
memories for in-memory computing; and it is suitable for Monte Carlo sampling
while requiring almost exclusively addition and comparison operations. We
demonstrate NSMs on standard classification benchmarks (MNIST and CIFAR) and
event-based classification benchmarks (N-MNIST and DVS Gestures). Our results
show that NSMs perform comparably to or better than conventional artificial
neural networks with the same architecture.
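The self-normalizing behavior described above can be seen in a minimal sketch (our illustration, not the authors' implementation; the weights and inputs are arbitrary): a unit with always-on multiplicative Bernoulli noise and a zero-threshold activation has a firing probability that is invariant to rescaling the weight vector, since positive scaling never changes the sign of the noisy pre-activation.

```python
import numpy as np

rng = np.random.default_rng(0)

def nsm_fire_prob(w, x, p=0.5, samples=50_000):
    """Estimate P(sum_i xi_i * w_i * x_i > 0) with xi_i ~ Bernoulli(p)
    (always-on multiplicative stochasticity) and a zero-threshold neuron."""
    xi = (rng.random((samples, w.size)) < p).astype(float)  # Bernoulli masks
    pre = xi @ (w * x)                                      # noisy pre-activations
    return (pre > 0).mean()

w = np.array([0.8, -0.3, 0.5, 0.1])   # arbitrary weights
x = np.array([1.0, 2.0, -1.0, 0.5])   # arbitrary input

p1 = nsm_fire_prob(w, x)
p2 = nsm_fire_prob(10.0 * w, x)       # rescaled weights
print(p1, p2)  # the two firing probabilities should match
```

This scale invariance is the simplest face of the normalization property; the paper develops the full connection to Weight Normalization for the activation probability itself.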
Targeting the Most Harmful Offenders for an English Police Agency: Continuity and Change of Membership in the “Felonious Few”
Funder: University of Cambridge
Research Question
How concentrated is the total harm of offences with detected offenders (identified suspects) among the complete list of all detected offenders in a given year in an English police agency, and how consistent is the list of highest-harm “felonious few” offenders from one year to the next?
Data
Characteristics of 327,566 crimes and 39,545 unique offenders, as recorded by Northamptonshire Police over the 7 years from 2010 to 2016, provide the basis for this analysis.
Methods
Crime and offender records were matched to harm weightings derived from the Cambridge Crime Harm Index (Sherman et al. 2016a; Sherman et al., Policing, 10(3), 171–183, 2016b). Descriptive statistics summarize the concentration of harm that identifies the felonious few, changes over time in membership of the “few”, and offender typologies, together with tests for escalation of severity, frequency and intermittency across repeated offences.
Findings
Crime harm is much more concentrated among offenders than crime volume: 80% of crime harm that is identified to an offender is linked to a felonious few of just 7% of all detected offenders. While chronic repeat offenders are the majority contributors to harm totals of this group, those with the most general range of offence types contribute the most harm. Individual members of the felonious few rarely maintain that position year on year; over 95% of each year’s list is composed of individuals not present in previous years. Within individual crime histories, we observe a pattern of de-escalation in crime harm per offence over time. “One-time” offenders, those with just one crime record, typically made up a third of the felonious few in both number and harm contribution.
Conclusions
These findings demonstrate the potential to target a small number of repeat offenders for harm reduction strategies using a metric of total crime severity, not just volume, even though a substantial portion of crime harm is caused by one-time offenders and may be largely unpredictable.
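The concentration statistic reported in the Findings can be computed in a few lines; the sketch below uses synthetic lognormal harm scores (an assumption, not the Northamptonshire data) purely to show the calculation of the "felonious few" harm share.

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic per-offender harm scores with a heavy tail (lognormal assumption),
# one score per detected offender, matching the study's offender count
harm = rng.lognormal(mean=0.0, sigma=2.0, size=39_545)

sorted_harm = np.sort(harm)[::-1]      # most harmful offenders first
top_n = int(0.07 * harm.size)          # the "felonious few": top 7% by harm
share = sorted_harm[:top_n].sum() / harm.sum()
print(f"top 7% of offenders account for {share:.0%} of total harm")
```

With real data, `harm` would be the sum of Cambridge Crime Harm Index weights over each offender's detected offences; the synthetic tail here merely illustrates why a small share of offenders can carry most of the harm.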
Retrieval of Life Affirming Values and their Incorporation into a Suicidality Prevention Plan
Abstract. This article is intended primarily as a companion piece to provide additional background and illustration for a submission by the same authors to The Journal of the American Academy of Child and Adolescent Psychiatry. It is also the second in a series appearing in Conscience Works to characterize recently employed techniques to render psychiatric treatment of children and adolescents in a conscience-sensitive manner. It consists of a progressive Case Presentation interwoven with Discussion points, which together demonstrate the retrieval of life-affirming values in the context of suicidality management and the incorporation of these values into an overall suicidality prevention plan.
13kW Advanced Electric Propulsion Flight System Development and Qualification
The next phase of robotic and human deep space exploration missions requires high-performance, high-power solar electric propulsion systems for large-scale science missions and cargo transportation. Aerojet Rocketdyne's Advanced Electric Propulsion System (AEPS) program is completing development and qualification of a 13kW flight EP system to support NASA exploration. The first use of the AEPS is planned for the NASA Power & Propulsion Element, which is the first element of NASA's cis-lunar Gateway. The flight AEPS system includes a magnetically shielded long-life Hall thruster, power processing unit (PPU), and xenon flow controller (XFC). The Hall thruster, originally developed and demonstrated by NASA's Glenn Research Center and the Jet Propulsion Laboratory, operates at input powers up to 13.3kW while providing a specific impulse over 2600s at an input voltage of 600V. The power processor is designed to accommodate an input voltage range of 95 to 140V, consistent with operation beyond the orbit of Mars. The integrated system is continuously throttleable between 3 and 13.3kW. The program has completed testing of the Technology Development Units and is progressing into the Engineering Development Unit test phase and the final design phase leading to Critical Design Review (CDR). This paper will present the high-power AEPS system capabilities, the overall program and design status, and the latest test results for the 13kW flight system development, as well as plans for the development and qualification effort of the EP string.
High-resolution broadband spectroscopy using externally dispersed interferometry at the Hale telescope: part 2, photon noise theory
High-resolution broadband spectroscopy at near-infrared (NIR) wavelengths (950 to 2450 nm) has been performed using externally dispersed interferometry (EDI) at the Hale telescope at Mt. Palomar, with the TEDI interferometer mounted within the central hole of the 200-in. primary mirror in series with the co-mounted TripleSpec NIR echelle spectrograph. These are the first multidelay EDI demonstrations on starlight. We demonstrated very high (10×) resolution boost and dramatic (20× or more) robustness to point spread function wavelength drifts in the native spectrograph. Data analysis, results, and instrument noise are described in a companion paper (part 1). This part 2 describes theoretical photon-limited and readout-noise-limited behaviors, using simulated spectra and an instrument model with noise added at the detector. We show that a single interferometer delay can be used to reduce the high frequency noise at the original resolution (1× boost case), and that except for delays much smaller than the native response peak half width, the fringing and nonfringing noises act uncorrelated and add in quadrature. This is due to the frequency shifting of the noise by the heterodyning effect. We find a sum rule for the noise variance for multiple delays. The multiple delay EDI using a Gaussian distribution of exposure times has a noise-to-signal ratio for photon-limited noise similar to a classical spectrograph with reduced slitwidth and reduced flux, proportional to the square root of the resolution boost achieved, but without the focal spot limitation and pixel spacing Nyquist limitations. At low boost (∼1×) EDI has ∼1.4× smaller noise than conventional, and at >10× boost, EDI has ∼1.4× larger noise than conventional. Readout noise is minimized by the use of three or four steps instead of TEDI's 10. Net noise grows as step phases change from a symmetrical arrangement with wavenumber across the band.
For three (or four) steps, we calculate a multiplicative bandwidth of 1.8:1 (2.3:1), sufficient to handle the visible band (400 to 700 nm, 1.8:1) and most of TripleSpec (2.6:1).
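The quadrature rule invoked above (uncorrelated fringing and nonfringing noise combining as sigma_total = sqrt(sigma_f^2 + sigma_nf^2)) is easy to check numerically; the noise levels below are arbitrary assumptions, not TEDI values.

```python
import numpy as np

rng = np.random.default_rng(7)
sigma_f, sigma_nf = 0.8, 1.5      # assumed fringing / nonfringing noise levels

# Draw the two independent noise components and add them
n_f = rng.normal(0, sigma_f, 1_000_000)
n_nf = rng.normal(0, sigma_nf, 1_000_000)

measured = np.std(n_f + n_nf)             # std of the combined noise
predicted = np.hypot(sigma_f, sigma_nf)   # quadrature sum: sqrt(0.8^2 + 1.5^2)
print(measured, predicted)                # the two values should closely agree
```

The same independence argument underlies the paper's sum rule for the noise variance over multiple delays: uncorrelated contributions add linearly in variance, not in standard deviation.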